Finetuning Qwen2.5-3B with Unsloth

Finetuning Qwen2.5-3B with SFT LoRA using Unsloth on the TinyStories instruction dataset

Finetuning
LoRA
Unsloth
Author

Quang T. Duong

Published

August 24, 2024

Get started with GenAI & LLMs through my Udemy course, Hands-on Generative AI Engineering with Large Language Model 👇

🤝 What is used

  • Qwen2.5-3B as the pretrained language model to be fine-tuned. With 3.09B parameters, the model is small enough to fine-tune on Colab. The fine-tuning itself will be presented in another post.